29 research outputs found

    Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning

    Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for comprehensive characterisation of deformable registration algorithms. Continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art of medical image registration. This paper describes the datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push medical image registration to new state-of-the-art performance. Furthermore, we dispelled the common belief that conventional registration methods must be much slower than deep-learning-based methods.
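    Label-overlap accuracy of the kind used in challenge evaluations like this one is commonly measured with the Dice coefficient between a fixed-image label map and the warped moving-image label map. A minimal sketch (the toy masks below are illustrative, not challenge data):

    ```python
    import numpy as np

    def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice overlap between two binary label masks (1.0 = perfect overlap)."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / denom

    # Toy example: a fixed-image label vs. a slightly shifted warped label.
    fixed = np.zeros((8, 8), dtype=bool)
    fixed[2:6, 2:6] = True
    warped = np.zeros((8, 8), dtype=bool)
    warped[3:7, 3:7] = True
    print(dice_score(fixed, warped))
    ```

    In practice a challenge evaluation averages such per-structure Dice scores over many anatomical labels and cases, alongside robustness (worst-case percentiles), deformation plausibility, and runtime.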

    Human Motion Tracking by Multiple RGBD Cameras

    No full text

    Robust One-Shot Segmentation of Brain Tissues via Image-Aligned Style Transformation

    One-shot segmentation of brain tissues is typically a dual-model iterative learning scheme: a registration model (reg-model) warps a carefully labeled atlas onto unlabeled images to initialize their pseudo masks for training a segmentation model (seg-model); the seg-model then revises the pseudo masks to enhance the reg-model for better warping in the next iteration. However, a key weakness of such dual-model iteration is that the spatial misalignment inevitably introduced by the reg-model can misguide the seg-model, causing it to converge to inferior segmentation performance. In this paper, we propose a novel image-aligned style transformation to reinforce the dual-model iterative learning for robust one-shot segmentation of brain tissues. Specifically, we first use the reg-model to warp the atlas onto an unlabeled image, and then employ Fourier-based amplitude exchange with perturbation to transplant the style of the unlabeled image into the aligned atlas. This allows the subsequent seg-model to learn from aligned and style-transferred copies of the atlas instead of unlabeled images, which naturally guarantees the correct spatial correspondence of each image-mask training pair without sacrificing the diversity of intensity patterns carried by the unlabeled images. Furthermore, we introduce a feature-aware content consistency, in addition to the image-level similarity, to constrain the reg-model toward a promising initialization, which avoids the collapse of the image-aligned style transformation in the first iteration. Experimental results on two public datasets demonstrate 1) segmentation performance competitive with the fully supervised method, and 2) superior performance over other state-of-the-art methods, with an increase in average Dice of up to 4.67%. The source code is available at: https://github.com/JinxLv/One-shot-segmentation-via-IST
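    The core of Fourier-based amplitude exchange is to keep one image's phase (which carries structure) while replacing the low-frequency part of its amplitude spectrum (which carries intensity style) with that of another image. A minimal 2D sketch, assuming a single exchanged low-frequency band of relative size `beta` and a uniform perturbation factor (the function name and parameters here are illustrative, not the paper's API):

    ```python
    import numpy as np

    def amplitude_exchange(content_img, style_img, beta=0.1, perturb=0.0, rng=None):
        """Keep content_img's Fourier phase, but swap in style_img's amplitude
        over a centered low-frequency band of relative half-width beta.
        perturb > 0 randomly scales the transplanted amplitude."""
        fft_c = np.fft.fftshift(np.fft.fft2(content_img))
        fft_s = np.fft.fftshift(np.fft.fft2(style_img))
        amp_c, phase_c = np.abs(fft_c), np.angle(fft_c)
        amp_s = np.abs(fft_s)

        h, w = content_img.shape
        bh, bw = int(h * beta), int(w * beta)
        cy, cx = h // 2, w // 2  # low frequencies sit at the center after fftshift
        band = (slice(cy - bh, cy + bh + 1), slice(cx - bw, cx + bw + 1))

        scale = 1.0
        if perturb > 0:
            rng = rng if rng is not None else np.random.default_rng()
            scale = 1.0 + rng.uniform(-perturb, perturb)

        amp_mix = amp_c.copy()
        amp_mix[band] = amp_s[band] * scale

        mixed = amp_mix * np.exp(1j * phase_c)
        return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))
    ```

    Applied after warping, the output is spatially aligned with the atlas mask (its phase is untouched) but carries the unlabeled image's intensity style, so the seg-model trains only on correctly paired image-mask samples.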

    Design, Synthesis and Antifungal Evaluation of Novel Pyrylium Salt In Vitro and In Vivo

    Discovering antifungal drugs with new skeletons is a direct way to address clinical fungal infections. Pyrylium salt SM21 was screened from a library containing 50,240 small molecules. Several studies on the antifungal activity and mechanism of SM21 have been reported, but the structure–activity relationship of pyrylium salts remained unclear. To explore the chemical space of the antifungal pyrylium salt SM21, a series of pyrylium salt derivatives were designed and synthesized, and their antifungal activities and structure–activity relationships (SAR) were investigated. Compared with SM21, most of the synthesized compounds exhibited equivalent or improved antifungal activity against Candida albicans in vitro. Compounds such as XY10, XY13, XY14, XY16 and XY17 exhibited comparable antifungal activities against C. albicans, with MIC values ranging from 0.47 to 1.0 μM. Notably, compound XY12 showed stronger antifungal activity and lower cytotoxicity: its MIC against C. albicans was 0.24 μM, and its cytotoxicity was 20-fold lower than that of SM21. In addition, XY12 was effective against fluconazole-resistant C. albicans and other pathogenic Candida species. More importantly, XY12 significantly increased the survival rate of mice with systemic C. albicans infection, demonstrating good antifungal activity both in vitro and in vivo. Our results indicate that structural modification of pyrylium salts could lead to the discovery of new antifungal drugs.

    Leveraging deep learning to identify calcification and colloid in thyroid nodules

    Background: Both calcification and colloid in thyroid nodules appear as echogenic foci in ultrasound images. However, calcification and colloid have significantly different probabilities of malignancy. We explored the performance of a deep learning (DL) model in distinguishing the echogenic foci of thyroid nodules as calcification or colloid. Methods: We conducted a retrospective study using ultrasound image sets. The DL model was trained and tested on 30,388 images of 1127 nodules, all pathologically confirmed. The area under the receiver-operating characteristic curve (AUC) was the primary evaluation index. Results: For the YoloV5 (You Only Look Once Version 5) transfer learning model based on DL detection, the average sensitivity, specificity, and accuracy in distinguishing echogenic foci in the test 1 group (n = 192) were 78.41%, 91.36%, and 77.81%, respectively, versus 51.14%, 82.58%, and 61.29% for three radiologists. In distinguishing small echogenic foci in the test 2 group (n = 58), the model's average sensitivity, specificity, and accuracy were 70.17%, 77.14%, and 73.33%, respectively, versus 57.69%, 63.29%, and 59.38% for the radiologists. Conclusions: The study demonstrated that DL performed far better than radiologists in distinguishing echogenic foci of thyroid nodules as calcification or colloid.
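    The sensitivity, specificity, and accuracy figures reported in studies like this derive directly from confusion-matrix counts. A minimal sketch (the counts below are made-up illustrative numbers, not the study's data):

    ```python
    def sens_spec_acc(tp: int, fp: int, tn: int, fn: int):
        """Sensitivity, specificity, and accuracy from confusion-matrix counts,
        treating e.g. 'calcification' as the positive class."""
        sensitivity = tp / (tp + fn)          # true positive rate
        specificity = tn / (tn + fp)          # true negative rate
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        return sensitivity, specificity, accuracy

    # Hypothetical counts: 40 calcifications (32 detected) and
    # 60 colloid foci (54 correctly ruled out).
    print(sens_spec_acc(tp=32, fp=6, tn=54, fn=8))
    ```

    The AUC used as the study's primary index summarizes these trade-offs over all decision thresholds rather than a single operating point.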